Michael Zollhoefer
Director, Research Scientist

Reality Labs Research (RLR)
District Fifteen: 131 15th Street
Pittsburgh, PA 15222
United States of America

Email: zollhoefer@meta.com

The goal of my research is to enable fully immersive remote communication and interaction in the virtual world at a level that is indistinguishable from reality. One key challenge is the photo-realistic digitization and efficient rendering of digital humans. To address this, I develop technology that combines fundamental computer vision, machine learning, and graphics techniques into a new neural capture and rendering paradigm. I believe that these novel neural rendering techniques will bring immersive communication in virtual and augmented reality one step closer to reality, and thus completely change the way we communicate in the future.

We are always looking for new talent!
Feel free to drop me an email if you are looking for a job or an internship.

Latest News:

  • 11/2023: Two papers (1, 2) accepted to 3DV 2024

  • 10/2023: One paper (1) accepted to NeurIPS 2023

  • 10/2023: Two papers (1, 2) accepted to SIGGRAPH Asia 2023

  • 07/2023: One paper (1) accepted to SCA 2023

  • 06/2023: Three papers (1, 2, 3) accepted to CVPR 2023

  • 07/2022: Four papers (1, 2, 3, 4) accepted to ECCV 2022

  • 06/2022: One paper (1) accepted to SIGGRAPH 2022

  • 04/2022: One paper (1) accepted to RSS 2022

  • 03/2022: Five papers (1, 2, 3, 4, 5) accepted to CVPR 2022

  • 02/2022: One STAR report (1) accepted to EG 2022

  • 10/2021: One paper (1) accepted to NeurIPS 2021

  • 08/2021: Two papers (1, 2) accepted to ICCV 2021

  • 06/2021: One paper (1) accepted to TPAMI

  • 05/2021: Two papers (1, 2) accepted to SIGGRAPH 2021

  • 04/2021: Five papers accepted to CVPR 2021: four orals (1, 2, 3, 4) and one poster (5)

  • 02/2021: Three papers (1, 2, 3) accepted to SIGGRAPH 2021 (via TOG)

  • 10/2020: One paper (1) accepted to NeurIPS 2020

  • 08/2020: One paper (1) accepted to SIGGRAPH Asia 2020

  • 08/2020: Two papers (1, 2) accepted to ECCV 2020

  • 07/2020: One paper (1) accepted to TOG 2020

  • 06/2020: One tutorial (1) accepted to CVPR 2020

  • 05/2020: One paper (1) accepted to TVCG 2020

  • 04/2020: Three papers (1, 2, 3) accepted to CVPR 2020

  • 03/2020: One STAR report (1) accepted to EG 2020

  • 02/2020: Organizing the DynaVis workshop (1) at CVPR 2020

  • 01/2020: One paper (1) accepted to ICLR 2020

  • 12/2019: Our paper (1) received an Honorable Mention for the NeurIPS Outstanding New Directions Paper Award

  • 09/2019: I joined Facebook Reality Labs (FRL) in Pittsburgh as a Research Scientist

  • 09/2019: One paper (1) accepted to NeurIPS 2019 as oral presentation

  • 08/2019: One paper (1) accepted to SIGGRAPH Asia 2019

  • 06/2019: Press release covering our text-based video editing approach (1)

  • 04/2019: I will be co-teaching CS448V at Stanford with M. Agrawala and O. Fried

  • 04/2019: Three papers (1, 2, 3) accepted to SIGGRAPH 2019

  • 03/2019: Our paper (1) received an IEEE VR Best Journal Paper Honorable Mention

  • 03/2019: One paper (1) accepted to ACM TOG (SIGGRAPH 2019)

  • 03/2019: Two papers (1, 2) accepted to CVPR 2019

  • 02/2019: Organizing the DynaVis workshop (1) at CVPR 2019

  • 01/2019: One paper (1) accepted to ACM TOG (SIGGRAPH 2019)

  • 01/2019: One paper (1) accepted to CACM 2019

  • 11/2018: One paper (1) accepted to IEEE VR 2019

  • 09/2018: One paper (1) accepted to TPAMI 2018

  • 08/2018: One paper (1) accepted to GCPR 2018

  • 05/2018: One tutorial (1) accepted to ECCV 2018

  • 03/2018: Two papers (1, 2) accepted to SIGGRAPH 2018

  • 02/2018: Two STAR reports (1, 2) accepted to EG 2018

  • 02/2018: Three papers (1, 2, 3) accepted to CVPR 2018

  • 02/2018: Two papers (1, 2) accepted to TOG 2018

  • 11/2017: I joined Stanford University as a visiting assistant professor

  • 07/2017: One paper (1) accepted to ICCV 2017

  • 06/2017: I received an MPC-VCC fellowship

  • 06/2017: One paper (1) accepted to ISMAR 2017

Copyright © Michael Zollhoefer